Heterogeneity in parallel and distributed computing
Abstract
Heterogeneity is one of the most profound and challenging features of today's parallel and distributed computing systems. From the macro level, where networks of distributed computers composed of diverse node architectures are interconnected with potentially heterogeneous networks, to the micro level, where deeper memory hierarchies, heterogeneous multicores, and various accelerator architectures are increasingly common, the impact of heterogeneity on all computing tasks is increasing rapidly.

This special issue on heterogeneity in parallel and distributed computing is inspired by the great success of the 21st International Heterogeneity in Computing Workshop (HCW 2012, held in Shanghai, China, in May 2012, in conjunction with IPDPS), which attracted 41 high-quality submissions, 17 of which were accepted for presentation and publication in the workshop proceedings. All submissions to this special issue, both extended versions of some HCW 2012 papers and many external original manuscripts, were rigorously reviewed by top-quality experts in the field. The result of the collective efforts of the reviewers and all the authors who submitted their work is this issue, featuring 15 accepted papers. They cover a wide range of topics, from multicore processor architecture to scheduling algorithms. A brief outline of the contents of each paper is given in the following paragraphs.

The first two papers evaluate two recent on-chip multiprocessor architectures. In the paper "Design space exploration of on-chip ring interconnection for a CPU–GPU heterogeneous architecture", Jaekyu Lee et al. study a prospective heterogeneous chip multiprocessor architecture integrating both CPU and GPU cores on the same chip. In this architecture, the on-chip interconnection network is used to control access to the resources shared by the CPU and GPU cores. The study focuses on the impact of this interconnection network on the overall performance of the architecture when CPU and GPGPU applications run simultaneously, and suggests an optimal ring interconnection network. In the paper "Sparse matrix–vector multiplication on the Single-Chip Cloud Computer many-core processor", Juan C. Pichel and Francisco F. Rivera study the performance potential and power efficiency of an experimental 48-core Intel processor, the Single-Chip Cloud Computer (SCC), for the execution of an irregular application such as sparse matrix–vector multiplication.

The next paper, "Energy-efficient multithreading for a hierarchical heterogeneous multicore through locality-cognizant thread", by Patrick Anthony La Fratta and Peter M. Kogge, deals with a novel heterogeneous multicore processor architecture, the passive/active multicore, proposed by the authors in their previous publications. In this paper, the authors present energy-efficient multithreading techniques for this architecture.

A compute node consisting of a multicore CPU and a GPU accelerator, as well as HPC systems built from such compute nodes, are becoming more and more common. The next three papers study ...
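To make the irregular memory behavior mentioned for the SpMV paper above concrete, here is a minimal sketch of sparse matrix–vector multiplication in CSR format. It is illustrative only: the function name, data layout, and the tiny example matrix are assumptions for this sketch, not code from Pichel and Rivera's paper.

```c
/* Illustrative sketch: y = A*x with A stored in CSR format.
 * The indirect access x[col_idx[k]] is what makes this kernel irregular. */
#include <stdio.h>

static void spmv_csr(int nrows, const int *row_ptr, const int *col_idx,
                     const double *vals, const double *x, double *y) {
    for (int i = 0; i < nrows; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];   /* indirect, data-dependent access */
        y[i] = sum;
    }
}

int main(void) {
    /* Hypothetical 3x3 example matrix: [[4,0,1],[0,3,0],[2,0,5]] */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double vals[]    = {4, 1, 3, 2, 5};
    double x[]       = {1, 2, 3};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);  /* expect [7, 6, 17] */
    return 0;
}
```

The data-dependent indexing through col_idx defeats simple prefetching and caching, which is why kernels like this are a useful stress test for many-core processors such as the SCC.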
Similar resources
Green Energy-aware task scheduling using the DVFS technique in Cloud Computing
Nowadays, energy consumption has become a critical issue in high-performance distributed computing systems, so green computing tries to reduce energy consumption, carbon footprint, and CO2 emissions in high-performance computing systems (HPCs) such as clusters, Grids, and Clouds that run a large number of parallel applications. Reducing energy consumption for high-end computing can bring various benefits such as red...
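As background for the DVFS technique named in the title above, the sketch below evaluates the conventional CMOS dynamic-power model P ≈ C·V²·f for a compute-bound task of a fixed cycle count. The constants and operating points are hypothetical, and this is generic illustration, not code or data from the cited work.

```c
/* Sketch of the standard dynamic-power model used in many DVFS studies:
 * P = C_eff * V^2 * f, time = cycles / f, energy = P * time. */
#include <stdio.h>

static double dynamic_energy(double c_eff, double volts, double freq_hz,
                             double cycles) {
    double power = c_eff * volts * volts * freq_hz;   /* watts */
    double time  = cycles / freq_hz;                  /* seconds */
    return power * time;                              /* joules */
}

int main(void) {
    /* Hypothetical operating points: lowering V and f together reduces
     * energy for the same work, at the cost of longer execution time. */
    printf("high V/f: %.2f J\n", dynamic_energy(1e-9, 1.2, 2.0e9, 4.0e9));
    printf("low  V/f: %.2f J\n", dynamic_energy(1e-9, 0.9, 1.0e9, 4.0e9));
    return 0;
}
```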
Static Task Allocation in Distributed Systems Using Parallel Genetic Algorithm
Over the past two decades, PC speeds have increased from a few instructions per second to several million instructions per second. The tremendous speed of today's networks as well as the increasing need for high-performance systems has made researchers interested in parallel and distributed computing. The rapid growth of distributed systems has led to a variety of problems. Task allocation is a...
Improving the palbimm scheduling algorithm for fault tolerance in cloud computing
Cloud computing is the latest technology that involves distributed computation over the Internet. It meets the needs of users through sharing resources and using virtualization technology. User workflow applications refer to a set of tasks to be processed within the cloud environment. Scheduling algorithms have a lot to do with the efficiency of cloud computing environments through selection of su...
Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.
Parallel computing is a topic of interest for a broad scientific community, since it speeds up many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using the MPI and OpenMP programming models on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
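Since the platform above targets MPI and OpenMP, the following minimal hybrid "hello" program sketches that programming model. It is generic example code, not part of UMZHPC, and the build command in the comment assumes a typical MPI installation with an OpenMP-capable compiler.

```c
/* Minimal hybrid MPI + OpenMP sketch.
 * Build, for example, with: mpicc -fopenmp hello.c -o hello */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;

    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```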
Cloud Computing Technology Algorithms Capabilities in Managing and Processing Big Data in Business Organizations: MapReduce, Hadoop, Parallel Programming
The objective of this study is to verify the importance of the capabilities of cloud computing services in managing and analyzing big data in business organizations, because the rapid development in the use of information technology in general, and network technology in particular, has led many organizations to make their applications available for use via electronic platforms hos...
Utilizing Heterogeneous Networks in Distributed Parallel Computing Systems
Heterogeneity is becoming quite common in distributed parallel computing systems, both in processor architectures and in communication networks. For example, many distributed systems are now being constructed using a variety of different communication networks, such as Ethernet, ATM, Fibre Channel, and HiPPI, within a single system. In addition to this hardware heterogeneity, there is a heteroge...
Journal: J. Parallel Distrib. Comput.
Volume: 73, Issue: -
Pages: -
Year of publication: 2013